fix: change rate limit key #41
Conversation
Walkthrough

Workers now rate-limit by client IP (from x-agent or cf-connecting-ip) and include rayId in error handling; wrangler limits reduced from 100 to 10; the test script adds agent_id headers, updates default endpoints, captures responses, and removes the per-request sleep.

Changes
Sequence Diagram(s)

sequenceDiagram
participant C as Client
participant W as Worker
participant R as RateLimiter
C->>W: HTTP request (includes x-agent / cf-connecting-ip, cf-ray/x-ray-id)
W->>W: Extract clientIP and rayId
W->>R: limit({ key: clientIP })
alt Allowed
R-->>W: ok
W-->>C: proceed with normal response
else Limited
R-->>W: limited
W-->>C: 429 JSON { error, message, code, ip, rayId }
end
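For orientation, here is a minimal sketch of the handler flow the diagram describes, assuming the claim-db worker's CLAIM_DB_RATE_LIMITER binding and the header names used in this PR; the real worker code differs in detail:

```ts
// Minimal sketch of the rate-limited fetch handler outlined in the diagram above.
// The Env shape and binding name are assumptions based on this PR, not the exact implementation.
interface Env {
	CLAIM_DB_RATE_LIMITER: { limit(options: { key: string }): Promise<{ success: boolean }> };
}

export default {
	async fetch(request: Request, env: Env): Promise<Response> {
		// Extract clientIP and rayId from the incoming headers.
		const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip') || 'anonymous';
		const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';

		const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP });
		if (!success) {
			// Limited: return the 429 JSON body shown in the diagram.
			return new Response(
				JSON.stringify({ error: 'Too Many Requests', message: 'Rate limit exceeded.', code: 429, ip: clientIP, rayId }),
				{ status: 429, headers: { 'Content-Type': 'application/json' } },
			);
		}

		// Allowed: proceed with the normal response.
		return new Response('ok');
	},
};
```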
✅ Preview CLIs & Workers are live! Test the CLIs locally under tag:

npx create-db@pr41
npx create-pg@pr41
npx create-postgres@pr41

Worker URLs

Deploying with

| Status | Name | Latest Commit | Preview URL | Updated (UTC) |
|---|---|---|---|---|
| ✅ Deployment successful! View logs | claim-db-worker | 3b51143 | Commit Preview URL / Branch Preview URL | Aug 12 2025, 07:44 AM |
Actionable comments posted: 7
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
- claim-db-worker/src/index.ts (1 hunks)
- claim-db-worker/wrangler.jsonc (1 hunks)
- create-db-worker/src/index.ts (1 hunks)
- create-db-worker/wrangler.jsonc (1 hunks)
- tests/test-rate-limits.sh (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
create-db-worker/src/index.ts (2)
- claim-db-worker/worker-configuration.d.ts (1): env (6799-6799)
- create-db-worker/worker-configuration.d.ts (1): env (6794-6794)
🪛 Shellcheck (0.10.0)
tests/test-rate-limits.sh
[info] 44-44: Double quote to prevent globbing and word splitting.
(SC2086)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Workers Builds: create-db-worker
- GitHub Check: Workers Builds: claim-db-worker
🔇 Additional comments (3)
create-db-worker/wrangler.jsonc (1)
30-32: Confirm Monitoring & SLO Updates for Reduced Rate Limit

The rate limit has been reduced from 100→10 req/min in both workers:
- create-db-worker/wrangler.jsonc (lines 30–32)
- claim-db-worker/wrangler.jsonc (lines 27–29)
Please ensure you:
- Validate that associated SLOs and error budgets accommodate the new 10 req/min threshold
- Update any dashboards or alerts (dev/staging/prod) referencing the old limit
- (Optional) Add an inline runbook comment in each wrangler.jsonc documenting the rationale and expected behavior
tests/test-rate-limits.sh (2)
17-19: Header redundancy and PII
- x-Agent and x-agent are the same header (case-insensitive); sending both is redundant.
- User-Agent and custom IDs may be logged server-side; ensure this is acceptable.
-echo "Agent Header: X-Agent: $AGENT_ID"
-echo "User-Agent: prisma-rate-limit-test/$AGENT_ID"
+echo "Agent Header: x-agent: $AGENT_ID"
+echo "User-Agent: prisma-rate-limit-test/$AGENT_ID"

Likely an incorrect or invalid review comment.
8-10: Default production URLs are intentional—no change needed

This script is a manual utility meant to validate your live rate-limit endpoints. Since you can already override CREATE_DB_URL and CLAIM_DB_URL via positional arguments, defaulting to the public URLs is expected behavior.

Likely an incorrect or invalid review comment.
// Use client IP for consistent rate limiting across environments
const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
🛠️ Refactor suggestion
Do not trust client header over CF IP; add fallbacks and avoid null keys
Using x-agent before cf-connecting-ip allows trivial rate-limit evasion by rotating x-agent. Also, clientIP can be null in some envs; the non-null assertion ! only silences TS, not runtime.
Refactor to prefer CF-provided IP, then standard proxy headers, then a dev/testing override, and finally a safe default.
- // Use client IP for consistent rate limiting across environments
- const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
- const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
+ // Use client IP for consistent rate limiting across environments (prefer CF header).
+ const clientIP =
+ request.headers.get('cf-connecting-ip') ||
+ request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+ request.headers.get('x-real-ip') ||
+ // Allow explicit test override last
+ request.headers.get('x-agent') ||
+ 'anonymous';
+ let success = true;
+ try {
+ ({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
+ } catch (err) {
+ // Fail-open to avoid blocking in case the binding is unavailable; adjust if you prefer fail-closed.
+ console.error('RateLimiter error', err);
+ success = true;
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Use client IP for consistent rate limiting across environments
const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });

// Use client IP for consistent rate limiting across environments (prefer CF header).
const clientIP =
	request.headers.get('cf-connecting-ip') ||
	request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
	request.headers.get('x-real-ip') ||
	// Allow explicit test override last
	request.headers.get('x-agent') ||
	'anonymous';
let success = true;
try {
	({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
} catch (err) {
	// Fail-open to avoid blocking in case the binding is unavailable; adjust if you prefer fail-closed.
	console.error('RateLimiter error', err);
	success = true;
}
🤖 Prompt for AI Agents
In claim-db-worker/src/index.ts around lines 33 to 35, the code currently trusts
the x-agent header first and uses a non-null assertion for the rate-limit key
which can be null at runtime; instead, prefer the Cloudflare-provided IP
(cf-connecting-ip), then standard proxy headers like x-forwarded-for (taking the
left-most IP), then a controlled dev/testing override header (e.g.
x-dev-client-ip), and finally a safe default string (e.g. "unknown" or
"127.0.0.1"); remove the non-null assertion and build a validated key string
before calling env.CLAIM_DB_RATE_LIMITER.limit({ key }), ensuring you trim and
normalize the chosen header value so it cannot be empty and falls back to the
default.
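Read as code, the prompt above amounts to a small key-selection helper. A hedged sketch (the header priority and the x-dev-client-ip override name come from the prompt, not from the PR itself):

```ts
// Hypothetical helper sketching the key-selection order described in the prompt above.
// x-dev-client-ip is the illustrative dev/testing override named there; it is not a header this PR defines.
function rateLimitKey(request: Request): string {
	const candidates = [
		request.headers.get('cf-connecting-ip'),
		request.headers.get('x-forwarded-for')?.split(',')[0], // left-most IP in the chain
		request.headers.get('x-dev-client-ip'),
	];
	for (const value of candidates) {
		const trimmed = value?.trim();
		if (trimmed) return trimmed; // first non-empty candidate wins
	}
	return 'unknown'; // safe default so the key is never null or empty
}

// Usage: const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: rateLimitKey(request) });
```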
"limit": 10, | ||
"period": 60, | ||
}, |
💡 Verification agent
🧩 Analysis chain
CLAIM_DB limit also tightened to 10 req/min — align expectations
Ensure clients and tests reflect the new ceiling to avoid false alarms. Consider staggered rollout or per-env overrides if needed.
🏁 Script executed:
#!/bin/bash
# Show the wrangler.jsonc around the rate limit config
echo "---- claim-db-worker/wrangler.jsonc (lines 1-100) ----"
sed -n '1,100p' claim-db-worker/wrangler.jsonc || true
echo -e "\n---- All occurrences of 'limit' in JSON/JSONC files ----"
rg --type-add 'jsonc:*.jsonc' --type-add 'json:*.json' -tjson -tjsonc '"limit"' -C 3
echo -e "\n---- References to rate limiting in code/tests ----"
rg -n "limit" -e "claim" -e "db" || true
rg -n "rate" -e "limit" || true
echo -e "\n---- Test files mentioning 10 or 60 ----"
rg -n "10" --glob '*test*' || true
rg -n "60" --glob '*test*' || true
Length of output: 2323
CLAIM_DB rate limit lowered to 10 req/min—update clients & tests

We’ve tightened CLAIM_DB_RATE_LIMITER (limit: 10, period: 60). Any clients or tests assuming a higher ceiling will now hit this threshold. To avoid failures or false alarms (a client-side handling sketch follows this list):
• tests/test-rate-limits.sh: lower the default TEST_COUNT (currently 110) to ≤ 10 or make it configurable per-env
• Update client integrations/docs that reference CLAIM_DB throughput > 10 req/min
• Consider per-environment overrides or a staggered rollout if some workloads need more headroom
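For client integrations that may now hit the lower ceiling, a hedged sketch of 429 handling with backoff; the function name and retry counts are illustrative and not part of this PR:

```ts
// Illustrative client-side handling of the 10 req/min ceiling; values are examples only.
async function fetchWithRetry(url: string, maxAttempts = 3): Promise<Response> {
	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		const res = await fetch(url);
		if (res.status !== 429) return res;
		if (attempt === maxAttempts) break; // give up after the final attempt
		// Back off before retrying; the workers' period is 60 s, so a few seconds per retry is a reasonable start.
		const delayMs = Math.min(60_000, 2 ** attempt * 1_000);
		await new Promise((resolve) => setTimeout(resolve, delayMs));
	}
	throw new Error('Rate limited after retries');
}
```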
🤖 Prompt for AI Agents
In claim-db-worker/wrangler.jsonc around lines 27 to 29, the CLAIM_DB rate
limiter was reduced to 10 requests per 60 seconds which will break tests and
clients that assume higher throughput; update tests/test-rate-limits.sh to lower
TEST_COUNT to 10 or make TEST_COUNT configurable via env vars and default to
≤10, update any client integrations and documentation that reference >10 req/min
to reflect the new limit, and add a per-environment override (env var) or note a
staged rollout plan for workloads that need higher headroom.
# Test Rate Limits Script
# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url]
# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
🧹 Nitpick (assertive)
Add strict mode and safer loop quoting
Improve robustness and satisfy shellcheck.
# Test Rate Limits Script
-# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+set -Eeuo pipefail
Also change the loop and seq usage:
- for i in $(seq 1 $TEST_COUNT); do
+ for i in $(seq 1 "${TEST_COUNT}"); do
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 4, the script lacks strict shell
settings and uses unsafe loop/seq and unquoted variables; enable robust shell
behavior by adding at the top: set -euo pipefail and IFS=$'\n\t', then update
the loop to avoid unquoted seq and word-splitting (for example replace
constructs like for i in $(seq 1 $test_count) with for i in $(seq 1
"$test_count") or preferably a POSIX-safe while loop using a counter), and
ensure all variable expansions in the loop and commands are quoted (e.g.,
"$create_db_url" "$claim_db_url" "$agent_id") to satisfy shellcheck and prevent
word-splitting.
# Make the request with unique agent headers and capture body + status code
response=$(curl -s \
  -H "x-agent: $AGENT_ID" \
  -H "x-Agent: $AGENT_ID" \
  -H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
  -w "%{http_code}" \
  -o /tmp/response_$i.json \
  "$endpoint" 2>/dev/null)
🛠️ Refactor suggestion
Quote variables (SC2086), remove duplicate header, and safe temp files
- Quote expansions to avoid word splitting/globbing and satisfy shellcheck.
- Remove duplicate x-Agent header.
- Use mktemp and trap for cleanup.
- # Make the request with unique agent headers and capture body + status code
- response=$(curl -s \
- -H "x-agent: $AGENT_ID" \
- -H "x-Agent: $AGENT_ID" \
- -H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
- -w "%{http_code}" \
- -o /tmp/response_$i.json \
- "$endpoint" 2>/dev/null)
+ # Make the request with unique agent header and capture body + status code
+ tmp_dir="${TMP_DIR:-$(mktemp -d)}"
+ trap 'rm -rf "$tmp_dir"' EXIT
+ response="$(
+ curl -s \
+ -H "x-agent: ${AGENT_ID}" \
+ -H "User-Agent: prisma-rate-limit-test/${AGENT_ID}" \
+ -w "%{http_code}" \
+ -o "${tmp_dir}/response_${i}.json" \
+ "${endpoint}" 2>/dev/null
+ )"
🧰 Tools
🪛 Shellcheck (0.10.0)
[info] 44-44: Double quote to prevent globbing and word splitting.
(SC2086)
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around lines 38 to 46, the curl call uses unquoted
variable expansions, repeats the x-Agent header, and writes to a fixed temp
path; fix it by removing the duplicate header, quoting all variable expansions
(e.g. "$AGENT_ID" and "$endpoint") to prevent word-splitting/globbing, create a
unique temp file with mktemp (e.g. tmp=$(mktemp)) and write curl output to that
quoted path, and add a trap to remove the temp file on exit to ensure safe
cleanup.
# Small delay between requests
sleep 0.1
#sleep 0.05
🧹 Nitpick (assertive)
Optional: throttle to reduce noisy bursts
Consider re-introducing a small sleep (e.g., 20–50ms) to avoid overwhelming remote endpoints during ad-hoc runs.
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 65, the commented-out small sleep
("#sleep 0.05") should be re-enabled to throttle bursts; uncomment it and use a
portable short delay such as sleep 0.02–0.05 (e.g., sleep 0.02 or sleep 0.05) to
reduce request bursts during ad-hoc runs, or if sub-second precision is required
in the environment, replace with a portable usleep/timeout helper that sleeps
for 20–50ms.
echo "" | ||
echo "💡 To test with your actual deployed URLs, run:" | ||
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io" | ||
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io my-local-agent" |
🧹 Nitpick (assertive)
Example invocation: consider demonstrating local defaults
Recommend showing a local/miniflare example first to prevent accidental production load.
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 89, the example invocation only shows
production endpoints which may cause accidental load; add a local/miniflare
first so users see a safe default. Update the echo line to print a local example
(e.g., using localhost or miniflare URL and a local agent name) before the
existing production example, ensuring both examples are clearly separated and
labeled (local first, then production).
Actionable comments posted: 2
♻️ Duplicate comments (2)
claim-db-worker/src/index.ts (1)
33-37: Do not trust client-controlled headers for rate-limit key; prefer CF IP, add safe fallback, and handle limiter errors

Prioritize cf-connecting-ip, add standard fallbacks, remove non-null assertion, and wrap the limiter call to avoid hard failures when the binding is unavailable.

- // Use client IP for consistent rate limiting across environments
- const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
- const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
-
- const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
+ // Use client IP for consistent rate limiting across environments (prefer CF header).
+ const clientIP =
+ request.headers.get('cf-connecting-ip') ||
+ request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+ request.headers.get('x-real-ip') ||
+ // Allow explicit test override last
+ request.headers.get('x-agent') ||
+ 'anonymous';
+ const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
+
+ let success = true;
+ try {
+ ({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
+ } catch (err) {
+ console.error('[ratelimit] binding error', err);
+ success = true; // fail-open; switch to false for fail-closed behavior
+ }

create-db-worker/src/index.ts (1)

15-22: Harden rate-limit key selection; avoid null keys; don’t prioritize client-controlled header; handle limiter failures

Prefer cf-connecting-ip, add standard fallbacks, remove the non-null assertion, avoid logging raw IPs, and wrap the limiter call to handle binding errors gracefully.

- // Use client IP for consistent rate limiting across environments
- const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
- const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
-
- console.log(`Client IP: ${clientIP} - Request URL: ${request.url}`);
-
- const { success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: clientIP! });
+ // Use client IP for consistent rate limiting across environments (prefer CF header).
+ const clientIP =
+ request.headers.get('cf-connecting-ip') ||
+ request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+ request.headers.get('x-real-ip') ||
+ // Allow explicit test override last
+ request.headers.get('x-agent') ||
+ 'anonymous';
+ const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
+
+ // Avoid logging raw IPs in production.
+ // console.log(`[ratelimit] key present: ${Boolean(clientIP)} - Request URL: ${request.url}`);
+
+ let success = true;
+ try {
+ ({ success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: clientIP }));
+ } catch (err) {
+ console.error('[ratelimit] binding error', err);
+ success = true; // fail-open; switch to false for fail-closed behavior
+ }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- claim-db-worker/src/index.ts (1 hunks)
- create-db-worker/src/index.ts (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
create-db-worker/src/index.ts (2)
- claim-db-worker/worker-configuration.d.ts (1): env (6799-6799)
- create-db-worker/worker-configuration.d.ts (1): env (6794-6794)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Workers Builds: claim-db-worker
- GitHub Check: Workers Builds: create-db-worker
🔇 Additional comments (2)
claim-db-worker/src/index.ts (1)
35-35: Ray ID extraction LGTM

Using cf-ray with a fallback is sensible for correlating logs and client responses.
create-db-worker/src/index.ts (1)
49-49: LGTM: clarifies the test endpoint message

The updated message is concise and accurate.
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		ip: clientIP,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);
🛠️ Refactor suggestion
Minimize PII: avoid logging/echoing raw IP; return only non-sensitive identifiers
Logging IPs and returning them in responses can be considered PII. Prefer logging rayId only and omit the ip field (or rename to a generic key if truly needed).
- console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
+ console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
- ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		ip: clientIP,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);

console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);
🤖 Prompt for AI Agents
In claim-db-worker/src/index.ts around lines 40 to 53, the handler logs and
returns the client's raw IP and includes it in the JSON response (PII). Remove
the raw IP from logs and response: change console.log to only mention the rayId
(and any non-PII token), and remove the ip property from the returned JSON (or
replace it with a non-identifying generic key like clientTag if absolutely
required). Ensure headers and status remain unchanged and do not expose the IP
elsewhere in this block.
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		ip: clientIP,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);
🛠️ Refactor suggestion
Reduce PII exposure in 429 path
Avoid logging IPs and do not echo them back; rely on rayId for correlation.
- console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
+ console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
- ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		ip: clientIP,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);

console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
	JSON.stringify({
		error: 'Too Many Requests',
		message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
		code: 429,
		rayId,
	}),
	{
		status: 429,
		headers: { 'Content-Type': 'application/json' },
	},
);
🤖 Prompt for AI Agents
In create-db-worker/src/index.ts around lines 24 to 37, the 429 response path
currently logs and returns the client's IP and echoes it in the JSON body;
remove PII exposure by eliminating any use of clientIP in logs and the response
body, keep and log only the rayId for correlation, and update the JSON payload
to exclude the ip field while still returning status 429 and Content-Type
application/json; ensure any console logs reference only non-PII context (e.g.,
"Rate limit exceeded. Ray ID: {rayId}") and that clientIP is not stored or
returned.
Summary by CodeRabbit
New Features
Refactor
Tests